Offline Reinforcement Learning




A Author Statement. The authors of this work would like to state that we bear full responsibility for any potential violation …

Neural Information Processing Systems

Table 3 presents the details of the datasets in the HoK1v1 task, with spells set to "frenzy". Generally, a level of "1" is used for datasets with the "norm" prefix, while a higher level is used for datasets with the "hard" prefix; this distinction indicates varying levels of difficulty. In the Generalization category, "norm_general" and "hard_general" have their corresponding datasets. For example, to sample the "norm_general" dataset, we let the level-1 model fight with level-0, … For example, in the "norm_hero_general" experiment, we directly use the model trained on the "norm_medium" dataset, which only contains the fixed default hero "luban."
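As a reading aid, here is a minimal sketch of how such dataset configurations could be organized. Only the facts stated above (level-1 vs. level-0 for "norm_general", the fixed hero "luban" for "norm_medium") are taken from the text; the `DATASET_CONFIGS` structure and its field names are assumptions for illustration.

```python
# Hypothetical mapping from HoK1v1 dataset names to sampling setups.
# Only "norm_general" (level-1 model vs. level-0 opponent) and
# "norm_medium" (fixed default hero "luban") are stated in the text;
# the dict layout itself is an assumption.
DATASET_CONFIGS = {
    "norm_general": {"model_level": 1, "opponent_level": 0},
    "norm_medium": {"model_level": 1, "heroes": ["luban"]},
}

for name, cfg in DATASET_CONFIGS.items():
    print(name, cfg)
```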




Rethinking Optimal Transport in Offline Reinforcement Learning

Neural Information Processing Systems

We propose a novel algorithm for offline reinforcement learning using optimal transport. Typically, in offline reinforcement learning, the data is provided by various experts, some of whom can be sub-optimal. To extract an efficient policy, it is necessary to "stitch" the best behaviors from the dataset. To address this problem, we rethink offline reinforcement learning as an optimal transportation problem. Based on this formulation, we present an algorithm that aims to find a policy mapping states to a partial distribution of the best expert actions for each given state. We evaluate the performance of our algorithm on continuous control problems from the D4RL suite and demonstrate improvements over existing methods.
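As a rough illustration of this idea (a minimal sketch, not the paper's actual algorithm), the snippet below uses a standard Sinkhorn solver to transport a uniform distribution over states onto a distribution restricted to the top-scoring expert actions. The critic scores `q`, the top-fraction heuristic `m`, and the squared-distance cost are all assumptions introduced for the example.

```python
import numpy as np

def sinkhorn(a, b, M, reg=0.05, n_iter=200):
    """Entropic optimal transport between histograms a and b
    with cost matrix M (standard Sinkhorn iterations)."""
    K = np.exp(-M / reg)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = np.where(b > 0, b / (K.T @ u), 0.0)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

rng = np.random.default_rng(0)
states = rng.normal(size=(8, 2))        # dataset states
actions = rng.normal(size=(10, 2))      # candidate expert actions
q = rng.normal(size=10)                 # assumed critic scores per action

m = 0.5                                 # fraction of action mass to keep
keep = q >= np.quantile(q, 1 - m)       # restrict to the "best" actions
b = keep / keep.sum()                   # partial target over actions
a = np.full(len(states), 1 / len(states))

# Purely illustrative cost; a real method would use a learned cost.
M = ((states[:, None, :] - actions[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(a, b, M)                # which actions each state maps to
print(plan.shape, plan.sum())           # (8, 10), total mass 1.0
```

Each row of the plan shows how a state's probability mass is spread over the retained actions; a practical method would then learn a parametric policy from such a coupling rather than use the plan directly.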


Adversarial Fine-tuning in Offline-to-Online Reinforcement Learning for Robust Robot Control

Ayabe, Shingo, Kera, Hiroshi, Kawamoto, Kazuhiko

arXiv.org Artificial Intelligence

Offline reinforcement learning enables sample-efficient policy acquisition without risky online interaction, yet policies trained on static datasets remain brittle under action-space perturbations such as actuator faults. This study introduces an offline-to-online framework that trains policies on clean data and then performs adversarial fine-tuning, where perturbations are injected into executed actions to induce compensatory behavior and improve resilience. A performance-aware curriculum further adjusts the perturbation probability during training via an exponential-moving-average signal, balancing robustness and stability throughout the learning process. Experiments on continuous-control locomotion tasks demonstrate that the proposed method consistently improves robustness over offline-only baselines and converges faster than training from scratch. Matching the fine-tuning and evaluation conditions yields the strongest robustness to action-space perturbations, while the adaptive curriculum strategy mitigates the degradation of nominal performance observed with the linear curriculum strategy. Overall, the results show that adversarial fine-tuning enables adaptive and robust control under uncertain environments, bridging the gap between offline efficiency and online adaptability.
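As a concrete reading of this recipe (a minimal sketch, not the authors' implementation), the wrapper below injects perturbations into executed actions with a probability driven by an exponential moving average of episode returns. The uniform-noise fault model, `target_return`, and all constants are assumptions, and a Box action space is assumed.

```python
import numpy as np
import gymnasium as gym

class AdversarialActionWrapper(gym.Wrapper):
    """Perturb executed actions with a probability adapted to recent
    performance (EMA of episode returns). Illustrative sketch only."""

    def __init__(self, env, p_init=0.1, p_max=0.5, noise_scale=0.3,
                 ema_beta=0.95, target_return=1000.0):
        super().__init__(env)
        self.p = p_init              # current perturbation probability
        self.p_max = p_max
        self.noise_scale = noise_scale
        self.ema_beta = ema_beta
        self.target_return = target_return  # assumed normalization constant
        self.ema_return = 0.0
        self.ep_return = 0.0
        self.rng = np.random.default_rng(0)

    def step(self, action):
        if self.rng.random() < self.p:
            # Simulate an actuator fault with additive uniform noise.
            low, high = self.action_space.low, self.action_space.high
            noise = self.rng.uniform(-1, 1, size=action.shape) * self.noise_scale
            action = np.clip(action + noise * (high - low), low, high)
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.ep_return += reward
        if terminated or truncated:
            # EMA of episode returns drives the curriculum: perturb more
            # when the policy performs well, less when it struggles.
            self.ema_return = (self.ema_beta * self.ema_return
                               + (1 - self.ema_beta) * self.ep_return)
            ratio = np.clip(self.ema_return / self.target_return, 0.0, 1.0)
            self.p = ratio * self.p_max
            self.ep_return = 0.0
        return obs, reward, terminated, truncated, info
```

A policy pretrained offline could then be fine-tuned online in, e.g., `AdversarialActionWrapper(gym.make("Walker2d-v4"))`, so that perturbation pressure grows only as the EMA of returns approaches the assumed target.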